METHOD AND APPARATUS FOR PROCESSING A VIDEO SIGNAL
Patent abstract:
Method and apparatus for processing a video signal. A method for decoding a video according to the present invention may comprise: obtaining a weighted prediction parameter of a current block; determining, based on the weighted prediction parameter, weights to be applied to a first prediction block generated based on a first reference image and to a second prediction block generated based on a second reference image; and obtaining a final prediction block of the current block based on a weighted sum of the first prediction block and the second prediction block.
Publication number: ES2737843A2
Application number: ES201931076
Filing date: 2017-06-30
Publication date: 2020-01-16
Inventor: Bae Keun Lee
Applicant: KT Corp
IPC main class:
Patent description:
[0001]
[0002] Method and apparatus for processing a video signal
[0003]
[0004] Technical field
[0005]
[0006] The present invention relates to a method and an apparatus for processing video signals.
[0007]
[0008] Prior art
[0009]
[0010] Recently, demand for high-resolution and high-quality images, such as high definition (HD) images and ultra high definition (UHD) images, has increased in various fields of application. However, image data of higher resolution and quality involve increasing amounts of data compared to conventional image data. Therefore, when image data are transmitted using a medium such as conventional wired and wireless broadband networks, or when image data are stored using a conventional storage medium, transmission and storage costs increase. To solve these problems, which occur as the resolution and quality of image data increase, high-efficiency image encoding/decoding techniques may be used.
[0011]
[0012] Image compression technology includes several techniques, including: an inter prediction technique of predicting a pixel value included in a current image from a previous or subsequent image of the current image; an intra prediction technique of predicting a pixel value included in a current image by using pixel information in the current image; an entropy coding technique of assigning a short code to a value with a high occurrence frequency and assigning a long code to a value with a low occurrence frequency; etc. Image data can be compressed effectively by using such image compression technology and can be transmitted or stored.
[0013] Meanwhile, along with the demand for high-resolution images, the demand for stereoscopic image content, which is a new imaging service, has also increased. A video compression technique for effectively providing stereoscopic image content with high resolution and ultra high resolution is under discussion.
[0014]
[0015] Disclosure
[0016]
[0017] Technical problem
[0018]
[0019] An object of the present invention is to provide a method and an apparatus for efficiently performing inter prediction on an encoding/decoding target block when encoding/decoding a video signal.
[0020]
[0021] An object of the present invention is to provide a method and an apparatus for determining a weight for each reference image in a variable/adaptive manner when encoding/decoding a video signal, and for performing bidirectional prediction based on a weighted-sum operation of a plurality of prediction blocks.
[0022]
[0023] An object of the present invention is to provide a method and an apparatus for efficiently encoding/decoding a weighted prediction parameter that determines the weights to be applied to both reference images when encoding/decoding a video signal.
[0024]
[0025] The technical objectives to be achieved by the present invention are not limited to the technical problems mentioned above, and other technical problems that are not mentioned will be clearly understood by those skilled in the art from the following description.
[0026]
[0027] Technical solution
[0028]
[0029] A method and an apparatus for decoding a video signal according to the present invention may obtain a weighted prediction parameter of a current block, determine weights to be applied to a first prediction block generated based on a first reference image and to a second prediction block generated based on a second reference image based on the weighted prediction parameter, and obtain a final prediction block of the current block based on a weighted sum of the first prediction block and the second prediction block.
[0030]
[0031] A method and an apparatus for encoding a video signal according to the present invention may determine weights to be applied to a first prediction block generated based on a first reference image and to a second prediction block generated based on a second reference image based on a weighted prediction parameter of a current block, and generate a final prediction block of the current block based on a weighted sum of the first prediction block and the second prediction block.
[0032]
[0033] In the method and the apparatus for encoding/decoding a video signal according to the present invention, the weighted prediction parameter may be determined as one candidate weighted prediction parameter specified by index information among a plurality of candidate weighted prediction parameters.
[0034]
[0035] In the method and the apparatus for encoding/decoding a video signal according to the present invention, the index information may be binarized with truncated unary binarization.
[0036]
[0037] In the method and the apparatus for encoding/decoding a video signal according to the present invention, a bit length of the index information may be determined based on whether the temporal orders of the first reference image and the second reference image are the same.
[0038]
[0039] In the method and the apparatus for encoding/decoding a video signal according to the present invention, a bit length of the index information may be determined based on whether a distance between the first reference image and the current image including the current block and a distance between the second reference image and the current image are the same.
[0040]
[0041] In the method and the apparatus for encoding/decoding a video signal according to the present invention, the weighted prediction parameter may be determined as one of the candidate prediction parameters included in a weighted prediction parameter set of the current block.
[0042]
[0043] In the method and the apparatus for encoding/decoding a video signal according to the present invention, the weighted prediction parameter set may be determined based on at least one of a distance between the first reference image and the current image including the current block, or a distance between the second reference image and the current image.
[0044]
[0045] In the method and the apparatus for encoding/decoding a video signal according to the present invention, the weighted prediction parameter set may be determined based on whether the temporal directions of the first reference image and the second reference image are the same.
[0046]
[0047] In the method and the apparatus for encoding/decoding a video signal according to the present invention, the weighted prediction parameter of the current block may be derived from a neighboring block adjacent to the current block.
[0048]
[0049] In the method and the apparatus for encoding/decoding a video signal according to the present invention, the weighted prediction parameter of the current block may be determined based on a temporal order difference between the current image and the first reference image and a temporal order difference between the current image and the second reference image.
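As an illustration of the weighted-sum operation summarized above, the following C++ sketch blends two prediction blocks with an integer weight. It is not the claimed implementation: the 1/8-unit weight precision, the rounding offset and the function name are assumptions made only for this example.

#include <cstdint>
#include <vector>

// Illustrative only: blend prediction block p0 (from the first reference image) and
// p1 (from the second reference image) with a weight w expressed in assumed 1/8 units,
// so that final = ((8 - w) * p0 + w * p1 + 4) >> 3.  With w = 4 the result is the
// ordinary average of the two prediction blocks.
std::vector<int16_t> weightedBiPrediction(const std::vector<int16_t>& p0,
                                          const std::vector<int16_t>& p1,
                                          int w)
{
    std::vector<int16_t> pred(p0.size());
    for (std::size_t i = 0; i < p0.size(); ++i)
        pred[i] = static_cast<int16_t>(((8 - w) * p0[i] + w * p1[i] + 4) >> 3);
    return pred;
}

Under these assumptions, a weighted prediction parameter signaled per block would simply select w from a set of candidates, for example through the index information mentioned in paragraph [0033].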
[0050]
[0051] The features briefly summarized above for the present invention are only illustrative aspects of the detailed description of the invention that follows, and do not limit the scope of the invention.
[0052] Advantageous effects
[0053]
[0054] According to the present invention, efficient inter prediction can be performed for an encoding/decoding target block.
[0055]
[0056] According to the present invention, a weight can be determined in a variable/adaptive manner for each reference image when bidirectional prediction is performed.
[0057]
[0058] According to the present invention, a weighted prediction parameter that determines the weights applied to both reference images can be encoded/decoded efficiently.
[0059]
[0060] The effects obtainable by the present invention are not limited to the effects mentioned above, and those skilled in the art will clearly understand other effects not mentioned from the following description.
[0061]
[0062] Description of the drawings
[0063]
[0064] Fig. 1 is a block diagram illustrating a device for encoding a video according to an embodiment of the present invention.
[0065]
[0066] Figure 2 is a block diagram illustrating a device for decoding a video according to an embodiment of the present invention.
[0067]
[0068] Fig. 3 is a diagram illustrating an example of hierarchically partitioning a coding block based on a tree structure according to an embodiment of the present invention.
[0069]
[0070] Figure 4 is a diagram illustrating a partition type in which binary tree-based partitioning is allowed according to an embodiment of the present invention.
[0071]
[0072] Figure 5 is a diagram illustrating an example in which only binary tree-based partitioning of a predetermined type is allowed according to an embodiment of the present invention.
[0073]
[0074] Figure 6 is a diagram for explaining an example in which information related to the allowed number of binary tree partitionings is encoded/decoded, according to an embodiment to which the present invention is applied.
[0075]
[0076] Figure 7 is a diagram illustrating a partition mode applicable to a coding block according to an embodiment of the present invention.
[0077]
[0078] Figure 8 is a flow chart illustrating processes of obtaining a residual sample according to an embodiment to which the present invention is applied.
[0079]
[0080] Figure 9 is a flow chart illustrating an inter prediction method according to an embodiment to which the present invention is applied.
[0081]
[0082] Figure 10 is a diagram illustrating processes of deriving motion information of a current block when a merge mode is applied to the current block.
[0083]
[0084] Figure 11 is a diagram illustrating processes of deriving motion information of a current block when an AMVP mode is applied to the current block.
[0085]
[0086] Figure 12 is a diagram of a bidirectional weighted prediction method according to an embodiment of the present invention.
[0087]
[0088] Figure 13 is a diagram for explaining a principle of bidirectional weighted prediction.
[0089]
[0090] Figure 14 is a diagram illustrating a scanning order between neighboring blocks.
[0091] Mode of the invention
[0092]
[0093] A variety of modifications may be made to the present invention and there are various embodiments of the present invention, examples of which will now be provided with reference to the drawings and described in detail. However, the present invention is not limited to them, and the exemplary embodiments should be construed as including all modifications, equivalents or substitutes within the technical concept and technical scope of the present invention. Similar reference numbers refer to similar elements in the drawings.
[0094]
[0095] The terms 'first', 'second', etc. used in the specification may be used to describe several components, but the components should not be construed as limited by these terms. The terms are only used to differentiate one component from other components. For example, the 'first' component may be called the 'second' component without departing from the scope of the present invention, and the 'second' component may likewise be called the 'first' component. The term 'and/or' includes a combination of a plurality of items or any one of a plurality of items.
[0096]
[0097] It will be understood that when, in the present description, an element is simply referred to as being 'connected to' or 'coupled to' another element, rather than 'directly connected to' or 'directly coupled to' another element, it may be 'directly connected to' or 'directly coupled to' the other element, or be connected or coupled to the other element with a further element intervening between them. On the contrary, when an element is referred to as being 'directly connected' or 'directly coupled' to another element, no intermediate elements are present.
[0098]
[0099] The terms used herein are merely used to describe particular embodiments, and are not intended to limit the present invention. An expression used in the singular encompasses the plural, unless it has a clearly different meaning in the context. In the present specification, it should be understood that terms such as 'including', 'having', etc. are intended to indicate the existence of the features, numbers, steps, actions, elements, parts or combinations thereof described in the specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, actions, elements, parts or combinations thereof may exist or be added.
[0100]
[0101] In the following, embodiments of the present invention will be described in detail with reference to the accompanying drawings. Hereinafter, the same constituent elements in the drawings are denoted by the same reference numbers, and a repeated description of the same elements will be omitted.
[0102]
[0103] Fig. 1 is a block diagram illustrating a device for encoding a video according to an embodiment of the present invention.
[0104]
[0105] With reference to Figure 1, the device 100 for encoding a video may include: an image partition module 110, prediction modules 120 and 125, a transformation module 130, a quantization module 135, a rearrangement module 160, an entropy coding module 165, an inverse quantization module 140, an inverse transformation module 145, a filter module 150 and a memory 155.
[0106]
[0107] The parts shown in Figure 1 are shown independently so as to represent characteristic functions different from each other in the device for encoding a video.
Therefore, it does not mean that each constitutional part is constituted as a separate hardware or software unit. In other words, each constitutional part is listed as a respective constitutional part for convenience. Thus, at least two constitutional parts may be combined to form one constitutional part, or one constitutional part may be divided into a plurality of constitutional parts to perform each function. An embodiment in which constitutional parts are combined and an embodiment in which a constitutional part is divided are also included within the scope of the present invention, provided they do not depart from the essence of the present invention.
[0108] In addition, some of the constituents may not be indispensable constituents performing essential functions of the present invention, but may be selective constituents that only improve its performance. The present invention can be implemented by including only the constitutional parts indispensable for implementing the essence of the present invention, excluding the constituents used only to improve performance. A structure that includes only the indispensable constituents, excluding the selective constituents used only to improve performance, is also included within the scope of the present invention.
[0109]
[0110] The image partition module 110 can divide an input image into one or more processing units. Here, the processing unit can be a prediction unit (PU), a transformation unit (TU) or a coding unit (CU). The image partition module 110 can divide an image into combinations of multiple coding units, prediction units and transformation units, and can encode the image by selecting one combination of coding units, prediction units and transformation units according to a predetermined criterion (for example, a cost function).
[0111]
[0112] For example, an image can be divided into several coding units. A recursive tree structure, such as a quad-tree structure, can be used to divide an image into coding units. A coding unit that is divided into other coding units, with an image or a largest coding unit as a root, can be partitioned with child nodes corresponding to the number of partitioned coding units. A coding unit that is no longer partitioned owing to a predetermined limitation serves as a leaf node. That is, when it is assumed that only square partitioning is possible for a coding unit, one coding unit may be divided into at most four other coding units.
[0113]
[0114] Hereinafter, in the embodiments of the present invention, the coding unit may mean a unit that performs encoding, or a unit that performs decoding.
[0115] A prediction unit may be one of partitions divided in a square or rectangular shape having the same size within a single coding unit, or a prediction unit may be one of partitions divided so as to have a different shape/size within a single coding unit.
[0116]
[0117] When a prediction unit subject to intra prediction is generated based on a coding unit and the coding unit is not the smallest coding unit, intra prediction can be performed without dividing the coding unit into multiple NxN prediction units.
[0118]
[0119] The prediction modules 120 and 125 may include an inter prediction module 120 that performs inter prediction and an intra prediction module 125 that performs intra prediction.
It is possible to determine whether to perform inter prediction or intra prediction for a prediction unit, and detailed information (for example, an intra prediction mode, a motion vector, a reference image, etc.) can be determined according to each prediction method. Here, the processing unit subject to prediction may be different from the processing unit for which the prediction method and its detailed content are determined. For example, the prediction method, the prediction mode, etc. may be determined by the prediction unit, and the prediction may be performed by the transformation unit. A residual value (residual block) between the generated prediction block and an original block may be input to the transformation module 130. In addition, the prediction mode information, the motion vector information, etc. used for prediction can be encoded together with the residual value by the entropy coding module 165 and can be transmitted to a device for decoding a video. When a particular encoding mode is used, it is possible to transmit to the device for decoding a video by encoding the original block as it is, without generating the prediction block through the prediction modules 120 and 125.
[0120]
[0121] The inter prediction module 120 can predict the prediction unit based on information of at least one of a previous image or a subsequent image of the current image, or, in some cases, can predict the prediction unit based on information of some encoded regions in the current image. The inter prediction module 120 may include a reference image interpolation module, a motion prediction module and a motion compensation module.
[0122]
[0123] The reference image interpolation module can receive reference image information from the memory 155 and can generate pixel information of an integer pixel or less than an integer pixel from the reference image. In the case of luma pixels, an 8-tap DCT-based interpolation filter with varying filter coefficients can be used to generate pixel information of an integer pixel or less than an integer pixel in units of 1/4 pixel. In the case of chroma signals, a 4-tap DCT-based interpolation filter with varying filter coefficients can be used to generate pixel information of an integer pixel or less than an integer pixel in units of 1/8 pixel.
[0124]
[0125] The motion prediction module can perform motion prediction based on the reference image interpolated by the reference image interpolation module. As methods for calculating a motion vector, several methods can be used, such as a full search-based block matching algorithm (FBMA), a three-step search (TSS), a new three-step search algorithm (NTS), etc. The motion vector can have a motion vector value in units of 1/2 pixel or 1/4 pixel based on an interpolated pixel. The motion prediction module can predict a current prediction unit by varying the motion prediction method. As motion prediction methods, various methods can be used, such as the skip method, the merge method, the AMVP (advanced motion vector prediction) method, the intra block copy method, etc.
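As an illustration of the DCT-based interpolation filtering described in paragraph [0123], the following C++ sketch applies an 8-tap horizontal filter to luma samples. The coefficients shown are the HEVC-style quarter-/half-pel luma filters and are used here only as an example; the description above does not fix specific coefficient values.

#include <algorithm>
#include <cstdint>

// Illustrative 8-tap horizontal interpolation at quarter-pel precision.  The taps span
// src[-3] .. src[+4], so the caller must guarantee that margin around the pointer.
static const int kLumaFilter[4][8] = {
    {  0, 0,   0, 64,  0,   0, 0,  0 },   // integer position (no filtering)
    { -1, 4, -10, 58, 17,  -5, 1,  0 },   // 1/4 position
    { -1, 4, -11, 40, 40, -11, 4, -1 },   // 2/4 (half) position
    {  0, 1,  -5, 17, 58, -10, 4, -1 },   // 3/4 position
};

int interpolateLumaHor(const uint8_t* src, int frac /* 0..3 */)
{
    int sum = 0;
    for (int k = 0; k < 8; ++k)
        sum += kLumaFilter[frac][k] * src[k - 3];   // taps centred on the current sample
    int val = (sum + 32) >> 6;                      // each filter sums to 64
    return std::min(255, std::max(0, val));         // clip to the 8-bit sample range
}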
[0126]
[0127] The intra prediction module 125 may generate a prediction unit based on reference pixel information adjacent to a current block, which is pixel information in the current image. When the neighboring block of the current prediction unit is a block subject to inter prediction and, therefore, a reference pixel is a pixel subject to inter prediction, the reference pixel included in the block subject to inter prediction may be replaced by reference pixel information of a neighboring block subject to intra prediction. That is, when a reference pixel is not available, at least one reference pixel among the available reference pixels may be used instead of the unavailable reference pixel information.
[0128]
[0129] Prediction modes in intra prediction may include a directional prediction mode, which uses reference pixel information depending on the prediction direction, and a non-directional prediction mode, which does not use directional information when making the prediction. A mode for predicting luma information may be different from a mode for predicting chroma information, and in order to predict the chroma information, the intra prediction mode information used to predict the luma information or the predicted luma signal information may be used.
[0130]
[0131] When performing intra prediction, when the size of the prediction unit is the same as the size of the transformation unit, the intra prediction can be performed on the prediction unit based on the pixels located to the left of, above-left of and above the prediction unit. However, when performing intra prediction, when the size of the prediction unit is different from the size of the transformation unit, the intra prediction can be performed using a reference pixel based on the transformation unit. In addition, intra prediction using an NxN partition may be used only for the smallest coding unit.
[0132]
[0133] In the intra prediction method, a prediction block can be generated after applying an AIS (adaptive intra smoothing) filter to a reference pixel depending on the prediction mode. The type of AIS filter applied to the reference pixel may vary. To perform the intra prediction method, an intra prediction mode of the current prediction unit can be predicted from the intra prediction mode of a prediction unit adjacent to the current prediction unit. In predicting the prediction mode of the current prediction unit by using mode information predicted from the neighboring prediction unit, when the intra prediction mode of the current prediction unit is the same as the intra prediction mode of the neighboring prediction unit, information indicating that the prediction modes of the current prediction unit and the neighboring prediction unit are equal to each other can be transmitted using predetermined flag information. When the prediction mode of the current prediction unit is different from the prediction mode of the neighboring prediction unit, entropy coding can be performed to encode the prediction mode information of the current block.
[0134]
[0135] In addition, a residual block, which includes information on a residual value that is the difference between the prediction unit subject to prediction and the original block of the prediction unit, can be generated based on the prediction units generated by the prediction modules 120 and 125. The generated residual block can be input to the transformation module 130.
[0136]
[0137] The transformation module 130 can transform the residual block, which includes the information on the residual value between the original block and the prediction unit generated by the prediction modules 120 and 125, by using a transformation method such as discrete cosine transform (DCT), discrete sine transform (DST) or KLT. Whether to apply DCT, DST or KLT to transform the residual block can be determined based on the intra prediction mode information of the prediction unit used to generate the residual block.
[0138]
[0139] The quantization module 135 can quantize the values transformed into the frequency domain by the transformation module 130. The quantization coefficients may vary depending on the block or the importance of an image. The values calculated by the quantization module 135 can be provided to the inverse quantization module 140 and the rearrangement module 160.
[0140]
[0141] The rearrangement module 160 can rearrange the coefficients of the quantized residual values.
[0142]
[0143] The rearrangement module 160 can change coefficients in the form of a two-dimensional block into coefficients in the form of a one-dimensional vector through a coefficient scanning method. For example, the rearrangement module 160 can scan from a DC coefficient to a coefficient in the high-frequency domain using a zigzag scanning method so as to change the coefficients into the form of a one-dimensional vector. Depending on the size of the transformation unit and the intra prediction mode, a vertical-direction scan, in which the coefficients in the form of a two-dimensional block are scanned in the column direction, or a horizontal-direction scan, in which the coefficients in the form of a two-dimensional block are scanned in the row direction, can be used instead of the zigzag scan. That is, which scanning method among the zigzag scan, the vertical-direction scan and the horizontal-direction scan is used can be determined according to the size of the transformation unit and the intra prediction mode.
[0144]
[0145] The entropy coding module 165 can perform entropy coding based on the values calculated by the rearrangement module 160. The entropy coding can use various coding methods, for example, exponential Golomb coding, context-adaptive variable length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC).
[0146]
[0147] The entropy coding module 165 can encode a variety of information, such as residual value coefficient information and block type information of the coding unit, prediction mode information, partition unit information, prediction unit information, transformation unit information, motion vector information, reference frame information, block interpolation information, filtering information, etc. from the rearrangement module 160 and the prediction modules 120 and 125.
[0148]
[0149] The entropy coding module 165 can entropy-encode the coefficients of the coding unit input from the rearrangement module 160.
[0150] The inverse quantization module 140 can inversely quantize the values quantized by the quantization module 135, and the inverse transformation module 145 can inversely transform the values transformed by the transformation module 130. The residual value generated by the inverse quantization module 140 and the inverse transformation module 145 can be combined with the prediction unit predicted by a motion estimation module, a motion compensation module and the intra prediction module of the prediction modules 120 and 125, so that a reconstructed block can be generated.
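As an illustration of the coefficient scanning described in paragraph [0143], the following C++ sketch rearranges a two-dimensional coefficient block into a one-dimensional vector in zig-zag order, starting from the DC coefficient; vertical and horizontal scans would differ only in the traversal order. The function name and container choice are illustrative.

#include <algorithm>
#include <vector>

// Zig-zag scan of an NxN coefficient block into a 1-D vector, starting at the DC
// coefficient (0,0) and ending at the highest-frequency coefficient (N-1,N-1).
std::vector<int> zigzagScan(const std::vector<std::vector<int>>& blk)
{
    const int n = static_cast<int>(blk.size());
    std::vector<int> out;
    out.reserve(n * n);
    for (int s = 0; s < 2 * n - 1; ++s) {               // anti-diagonal index: row + col == s
        if (s % 2 == 0) {                                // even diagonals: bottom-left -> top-right
            for (int r = std::min(s, n - 1); r >= std::max(0, s - n + 1); --r)
                out.push_back(blk[r][s - r]);
        } else {                                         // odd diagonals: top-right -> bottom-left
            for (int r = std::max(0, s - n + 1); r <= std::min(s, n - 1); ++r)
                out.push_back(blk[r][s - r]);
        }
    }
    return out;
}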
[0151]
[0152] The filter module 150 may include at least one of a deblocking filter, an offset correction unit and an adaptive loop filter (ALF).
[0153]
[0154] The deblocking filter can eliminate block distortion that occurs due to the boundaries between blocks in the reconstructed image. To determine whether to perform deblocking, the pixels included in several rows or columns of the block can be a basis for determining whether to apply the deblocking filter to the current block. When the deblocking filter is applied to the block, a strong filter or a weak filter can be applied depending on the required deblocking filtering strength. Furthermore, when applying the deblocking filter, the filtering in the horizontal direction and the filtering in the vertical direction can be processed in parallel.
[0155]
[0156] The offset correction module can correct the offset with respect to the original image, in units of a pixel, in the image subject to deblocking. To perform the offset correction on a particular image, it is possible to use a method of applying an offset in consideration of the edge information of each pixel, or a method of partitioning the pixels of an image into a predetermined number of regions, determining a region to be subject to an offset, and applying the offset to the determined region.
[0157] Adaptive loop filtering (ALF) can be performed based on a value obtained by comparing the filtered reconstructed image and the original image. The pixels included in the image can be divided into predetermined groups, a filter to be applied to each of the groups can be determined, and filtering can be performed individually for each group. Information on whether to apply the ALF and the luma signal can be transmitted per coding unit (CU). The shape and filter coefficients of a filter for ALF may vary depending on each block. In addition, the filter for ALF of the same form (fixed form) may be applied regardless of the characteristics of the target block to which it is applied.
[0158]
[0159] The memory 155 can store the reconstructed block or image calculated through the filter module 150. The stored reconstructed block or image can be provided to the prediction modules 120 and 125 when inter prediction is performed.
[0160]
[0161] Figure 2 is a block diagram of a device for decoding a video according to an embodiment of the present invention.
[0162]
[0163] With reference to Fig. 2, the device 200 for decoding a video may include: an entropy decoding module 210, a rearrangement module 215, an inverse quantization module 220, an inverse transformation module 225, prediction modules 230 and 235, a filter module 240 and a memory 245.
[0164]
[0165] When a video bit stream is input from the device for encoding a video, the input bit stream can be decoded according to a process inverse to that of the device for encoding a video.
[0166]
[0167] The entropy decoding module 210 can perform entropy decoding according to a process inverse to the entropy coding performed by the entropy coding module of the device for encoding a video. For example, in correspondence with the methods performed by the device for encoding a video, several methods can be applied, such as exponential Golomb coding, context-adaptive variable length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC).
[0168]
[0169] The entropy decoding module 210 can decode information about the intra prediction and inter prediction performed by the device for encoding a video.
[0170]
[0171] The rearrangement module 215 can perform rearrangement on the bit stream entropy-decoded by the entropy decoding module 210, based on the rearrangement method used in the device for encoding a video.
The rearrangement module can reconstruct and rearrange the coefficients in the form of a one-dimensional vector into coefficients in the form of a two-dimensional block. The rearrangement module 215 can receive information related to the coefficient scanning performed in the device for encoding a video and can perform rearrangement through a method of inversely scanning the coefficients according to the scanning order performed in the device for encoding a video.
[0172]
[0173] The inverse quantization module 220 can perform inverse quantization based on a quantization parameter received from the device for encoding a video and the rearranged coefficients of the block.
[0174]
[0175] The inverse transformation module 225 can perform the inverse transformation, that is, inverse DCT, inverse DST and inverse KLT, which are the inverse processes of the transformation, i.e., DCT, DST and KLT, performed by the transformation module on the quantization result in the device for encoding a video. The inverse transformation can be performed based on a transformation unit determined by the device for encoding a video. The inverse transformation module 225 of the device for decoding a video can selectively perform transformation schemes (e.g., DCT, DST and KLT) depending on multiple pieces of information, such as the prediction method, the size of the current block, the prediction direction, etc.
[0176]
[0177] The prediction modules 230 and 235 can generate a prediction block based on information about the generation of the prediction block received from the entropy decoding module 210 and information on the previously decoded image or block received from the memory 245.
[0178] As described above, like the operation of the device for encoding a video, when performing intra prediction, when the size of the prediction unit is the same as the size of the transformation unit, intra prediction can be performed on the prediction unit based on the pixels positioned to the left of, above-left of and above the prediction unit. When performing intra prediction, when the size of the prediction unit is different from the size of the transformation unit, intra prediction can be performed using a reference pixel based on the transformation unit. In addition, intra prediction using an NxN partition may be used only for the smallest coding unit.
[0179]
[0180] The prediction modules 230 and 235 may include a prediction unit determination module, an inter prediction module and an intra prediction module. The prediction unit determination module can receive a variety of information, such as prediction unit information, prediction mode information of an intra prediction method, motion prediction information of an inter prediction method, etc., from the entropy decoding module 210, can divide a current coding unit into prediction units, and can determine whether inter prediction or intra prediction is performed on the prediction unit. By using the information required for the inter prediction of the current prediction unit received from the device for encoding a video, the inter prediction module 230 may perform inter prediction on the current prediction unit based on the information of at least one of a previous image or a subsequent image of the current image that includes the current prediction unit. Alternatively, inter prediction can be performed based on the information of some previously reconstructed regions in the current image that includes the current prediction unit.
[0181]
[0182] In order to perform inter prediction, it is possible to determine, for the coding unit, which of a skip mode, a merge mode, an AMVP mode and an intra block copy mode is used as the motion prediction method of the prediction unit included in the coding unit.
[0183] The intra prediction module 235 can generate a prediction block based on the pixel information in the current image. When the prediction unit is a prediction unit subject to intra prediction, intra prediction can be performed based on the intra prediction mode information of the prediction unit received from the device for encoding a video. The intra prediction module 235 may include an AIS (adaptive intra smoothing) filter, a reference pixel interpolation module and a DC filter. The AIS filter filters the reference pixels of the current block, and whether to apply the filter can be determined according to the prediction mode of the current prediction unit. AIS filtering can be performed on the reference pixels of the current block using the prediction mode of the prediction unit and the AIS filter information received from the device for encoding a video. When the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied.
[0184]
[0185] When the prediction mode of the prediction unit is a prediction mode in which intra prediction is performed based on a pixel value obtained by interpolating the reference pixel, the reference pixel interpolation module can interpolate the reference pixel to generate a reference pixel of an integer pixel or less than an integer pixel. When the prediction mode of the current prediction unit is a prediction mode in which a prediction block is generated without interpolating the reference pixel, the reference pixel may not be interpolated. The DC filter can generate a prediction block through filtering when the prediction mode of the current block is a DC mode.
[0186]
[0187] The reconstructed block or image may be provided to the filter module 240. The filter module 240 may include the deblocking filter, the offset correction module and the ALF.
[0188]
[0189] Information on whether or not the deblocking filter is applied to the corresponding block or image, and information on which of the strong and weak filters is applied when the deblocking filter is applied, can be received from the device for encoding a video. The deblocking filter of the device for decoding a video can receive information about the deblocking filter from the device for encoding a video, and can perform deblocking filtering on the corresponding block.
[0190]
[0191] The offset correction module can perform offset correction on the reconstructed image based on the type of offset correction and the offset value information applied to the image when encoding.
[0192]
[0193] The ALF can be applied to the coding unit based on the information on whether the ALF should be applied, the ALF coefficient information, etc., received from the device for encoding a video. The ALF information can be provided as included in a particular parameter set.
[0194]
[0195] The memory 245 may store the reconstructed image or block for use as a reference image or reference block, and may provide the reconstructed image to an output module.
[0196]
[0197] As described above, in the embodiments of the present invention, for the sake of explanation, the coding unit is used as a term representing a unit for encoding, but the coding unit can serve as a unit that performs decoding as well as encoding.
[0198]
[0199] In addition, a current block can represent a target block to be encoded/decoded. And the current block may represent a coding tree block (or a coding tree unit), a coding block (or a coding unit), a transformation block (or a transformation unit), a prediction block (or a prediction unit), or the like, depending on the encoding/decoding step.
[0200]
[0201] An image can be encoded/decoded by being divided into base blocks having a square shape or a non-square shape. At this time, the base block can be referred to as a coding tree unit. The coding tree unit can be defined as a coding unit of the largest size allowed within a sequence or a slice. Information on whether the coding tree unit has a square shape or a non-square shape, or information on the size of the coding tree unit, can be signaled through a sequence parameter set, an image parameter set or a slice header. The coding tree unit can be divided into smaller partitions. At this time, if it is assumed that the depth of a partition generated by dividing the coding tree unit is 1, the depth of a partition generated by dividing the partition having depth 1 can be defined as 2. That is, a partition generated by dividing a partition having a depth k in the coding tree unit can be defined as having a depth k+1.
[0202]
[0203] A partition of arbitrary size generated by dividing a coding tree unit can be defined as a coding unit. The coding unit can be recursively divided or divided into base units for performing prediction, quantization, transformation, loop filtering and the like. For example, a partition of arbitrary size generated by dividing the coding unit can be defined as a coding unit, or it can be defined as a transformation unit or a prediction unit, which is a base unit for prediction, quantization, transformation, loop filtering and the like.
[0204]
[0205] The partitioning of a coding tree unit or a coding unit can be performed based on at least one of a vertical line and a horizontal line. In addition, the number of vertical lines or horizontal lines that divide the coding tree unit or the coding unit can be at least one or more. For example, the coding tree unit or the coding unit can be divided into two partitions using one vertical line or one horizontal line, or the coding tree unit or the coding unit can be divided into three partitions using two vertical lines or two horizontal lines. Alternatively, the coding tree unit or the coding unit can be divided into four partitions having a length and a width of 1/2 by using one vertical line and one horizontal line.
[0206]
[0207] When a coding tree unit or a coding unit is divided into a plurality of partitions using at least one vertical line or at least one horizontal line, the partitions may have a uniform size or different sizes. Alternatively, any one partition may have a size different from the remaining partitions.
[0208]
[0209] In the embodiments described below, it is assumed that a coding tree unit or a coding unit is divided into a quad-tree structure or a binary tree structure. However, it is also possible to divide a coding tree unit or a coding unit using a larger number of vertical lines or a larger number of horizontal lines.
[0210]
[0211] Fig. 3 is a diagram illustrating an example of hierarchically partitioning a coding block based on a tree structure according to an embodiment of the present invention.
[0212]
[0213] An input video signal is decoded in predetermined block units.
Such a predetermined unit for decoding the input video signal is a coding block. The coding block can be a unit that performs intra/inter prediction, transformation and quantization. In addition, a prediction mode (for example, an intra prediction mode or an inter prediction mode) is determined in units of a coding block, and the prediction blocks included in the coding block can share the determined prediction mode. The coding block can be a square or non-square block having an arbitrary size in a range of 8x8 to 64x64, or it can be a square or non-square block having a size of 128x128, 256x256 or more.
[0214]
[0215] Specifically, the coding block can be divided hierarchically based on at least one of a quad tree and a binary tree. Here, quad tree-based partitioning can mean that a 2Nx2N coding block is divided into four NxN coding blocks, and binary tree-based partitioning can mean that one coding block is divided into two coding blocks. Even if binary tree-based partitioning is performed, a square-shaped coding block may exist at a lower depth.
[0216]
[0217] Binary tree-based partitioning can be performed symmetrically or asymmetrically. The coding block divided based on the binary tree can be a square block or a non-square block, such as a rectangular shape. For example, a partition type in which binary tree-based partitioning is allowed can comprise at least one of a symmetric type of 2NxN (horizontal directional non-square coding unit) or Nx2N (vertical directional non-square coding unit), or an asymmetric type of nLx2N, nRx2N, 2NxnU or 2NxnD.
[0218]
[0219] Binary tree-based partitioning can be limited to either the symmetric or the asymmetric type of partition. In this case, constructing the coding tree unit with square blocks may correspond to quad tree CU partitioning, and constructing the coding tree unit with symmetric non-square blocks may correspond to binary tree partitioning. Constructing the coding tree unit with square blocks and symmetric non-square blocks may correspond to quad tree and binary tree CU partitioning.
[0220]
[0221] Binary tree-based partitioning can be performed on a coding block on which quad tree-based partitioning is no longer performed. Quad tree-based partitioning may no longer be performed on a coding block partitioned based on the binary tree.
[0222]
[0223] In addition, the partitioning of a lower depth can be determined according to the partition type of a higher depth. For example, if binary tree-based partitioning is allowed at two or more depths, only the same type as the binary tree partitioning of the higher depth may be allowed at the lower depth. For example, if the binary tree-based partitioning at the higher depth is performed with the 2NxN type, the binary tree-based partitioning at the lower depth is also performed with the 2NxN type. Alternatively, if the binary tree-based partitioning at the higher depth is performed with the Nx2N type, the binary tree-based partitioning at the lower depth is also performed with the Nx2N type.
[0224]
[0225] On the contrary, it is also possible to allow, at a lower depth, only a type different from the binary tree partition type of the higher depth.
[0226]
[0227] It may be possible to limit only a specific type of binary tree-based partitioning to be used for a sequence, a slice, a coding tree unit or a coding unit. As an example, only the 2NxN type or the Nx2N type of binary tree-based partitioning may be allowed for the coding tree unit. An available partition type can be predefined in the encoder or the decoder.
Alternatively, information about the available partition type or about the unavailable partition type can be encoded and then signaled through a bit stream.
[0228]
[0229] Figure 5 is a diagram illustrating an example in which only a specific type of binary tree-based partitioning is allowed. Figure 5A shows an example in which only the Nx2N type of binary tree-based partitioning is allowed, and Figure 5B shows an example in which only the 2NxN type of binary tree-based partitioning is allowed. To implement adaptive partitioning based on the quad tree or the binary tree, there can be used information indicating quad tree-based partitioning, information on the size/depth of the coding block for which quad tree-based partitioning is allowed, information indicating binary tree-based partitioning, information on the size/depth of the coding block for which binary tree-based partitioning is allowed, information on the size/depth of the coding block for which binary tree-based partitioning is not allowed, information about whether binary tree-based partitioning is performed in a vertical direction or a horizontal direction, etc.
[0230]
[0231] In addition, information on the number of times binary tree partitioning is allowed, a depth at which binary tree partitioning is allowed, or the number of depths at which binary tree partitioning is allowed can be obtained for a coding tree unit or a specific coding unit. The information can be encoded in units of a coding tree unit or a coding unit, and can be transmitted to a decoder through a bit stream.
[0232]
[0233] For example, a syntax element 'max_binary_depth_idx_minus1' indicating a maximum depth at which binary tree partitioning is allowed can be encoded/decoded through a bit stream. In this case, max_binary_depth_idx_minus1 + 1 can indicate the maximum depth at which binary tree partitioning is allowed.
[0234]
[0235] With reference to the example shown in Figure 6, in Figure 6 binary tree partitioning has been performed for a coding unit having a depth of 2 and a coding unit having a depth of 3. Consequently, at least one of information indicating the number of times binary tree partitioning has been performed in the coding tree unit (i.e., 2 times), information indicating the maximum depth at which binary tree partitioning is allowed in the coding tree unit (i.e., depth 3), or the number of depths at which binary tree partitioning has been performed in the coding tree unit (i.e., 2 (depth 2 and depth 3)) can be encoded/decoded through a bit stream.
[0236]
[0237] As another example, at least one of information on the number of times binary tree partitioning is allowed, the depth at which binary tree partitioning is allowed, or the number of depths at which binary tree partitioning is allowed can be obtained for each sequence or each slice. For example, the information can be encoded in units of a sequence, an image or a slice and transmitted through a bit stream. Consequently, at least one of the number of binary tree partitionings in a first slice, the maximum depth at which binary tree partitioning is allowed in the first slice, or the number of depths at which binary tree partitioning is performed in the first slice may differ from that of a second slice. For example, in the first slice, binary tree partitioning can be allowed for only one depth, while, in the second slice, binary tree partitioning can be allowed for two depths.
[0238]
[0239] As another example, the number of times binary tree partitioning is allowed, the depth at which binary tree partitioning is allowed, or the number of depths at which binary tree partitioning is allowed can be configured differently according to a temporal level identifier (TemporalID) of a slice or an image. Here, the temporal level identifier (TemporalID) is used to identify each of a plurality of video layers having scalability of at least one of view, spatial, temporal or quality.
[0240]
[0241] As shown in Figure 3, the first coding block 300 with a partition depth (split depth) of k can be divided into multiple second coding blocks based on the quad tree. For example, the second coding blocks 310 to 340 can be square blocks that are half the width and half the height of the first coding block, and the partition depth of the second coding blocks can be increased to k+1.
[0242]
[0243] The second coding block 310 with the partition depth of k+1 can be divided into multiple third coding blocks with a partition depth of k+2. The partitioning of the second coding block 310 can be performed selectively using one of the quad tree and the binary tree, depending on a partitioning method. Here, the partitioning method can be determined based on at least one of the information indicating quad tree-based partitioning and the information indicating binary tree-based partitioning.
[0244]
[0245] When the second coding block 310 is divided based on the quad tree, the second coding block 310 can be divided into four third coding blocks 310a that are half the width and half the height of the second coding block, and the partition depth of the third coding blocks 310a can be increased to k+2. On the contrary, when the second coding block 310 is divided based on the binary tree, the second coding block 310 can be divided into two third coding blocks. Here, each of the two third coding blocks can be a non-square block having half the width or half the height of the second coding block, and the partition depth can be increased to k+2. The second coding block can be determined as a non-square block of a horizontal or vertical direction depending on the partitioning direction, and the partitioning direction can be determined based on the information about whether binary tree-based partitioning is performed in a vertical direction or a horizontal direction.
[0246]
[0247] Meanwhile, the second coding block 310 can be determined as a leaf coding block that is no longer partitioned based on the quad tree or the binary tree. In this case, the leaf coding block can be used as a prediction block or a transformation block.
[0248]
[0249] Like the partitioning of the second coding block 310, the third coding block 310a can be determined as a leaf coding block, or it can be further divided based on the quad tree or the binary tree.
[0250]
[0251] Meanwhile, the third coding block 310b partitioned based on the binary tree can be further divided into coding blocks 310b-2 of a vertical direction or coding blocks 310b-3 of a horizontal direction based on the binary tree, and the partition depth of the relevant coding blocks can be increased to k+3. Alternatively, the third coding block 310b can be determined as a leaf coding block 310b-1 that is no longer partitioned based on the binary tree. In this case, the coding block 310b-1 can be used as a prediction block or a transformation block. However, the above partitioning process can be performed in a limited manner based on at least one of the information about the size/depth of the coding block for which quad tree-based partitioning is allowed, the information about the size/depth of the coding block for which binary tree-based partitioning is allowed, and the information about the size/depth of the coding block for which binary tree-based partitioning is not allowed.
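The depth bookkeeping described above can be illustrated with the following C++ sketch, in which splitting a block of depth k by the quad tree produces four half-width, half-height blocks of depth k+1, and splitting it by the binary tree produces two 2NxN or Nx2N blocks of depth k+1. How the split decision itself is made (signaled flags, rate-distortion cost, the restrictions just mentioned) is outside the sketch, and the structure and function names are illustrative.

#include <functional>

struct CodeBlock { int x, y, w, h, depth; };

// splitMode returns 0 for a leaf, 1 for a quad-tree split, 2 for a horizontal binary
// split (two 2NxN blocks) and 3 for a vertical binary split (two Nx2N blocks).
void partition(const CodeBlock& b,
               const std::function<int(const CodeBlock&)>& splitMode,
               const std::function<void(const CodeBlock&)>& onLeaf)
{
    switch (splitMode(b)) {
    case 1:                                                              // quad-tree split: depth k -> k+1
        for (int i = 0; i < 4; ++i)
            partition({ b.x + (i % 2) * b.w / 2, b.y + (i / 2) * b.h / 2,
                        b.w / 2, b.h / 2, b.depth + 1 }, splitMode, onLeaf);
        break;
    case 2:                                                              // binary split into two 2NxN blocks
        partition({ b.x, b.y,           b.w, b.h / 2, b.depth + 1 }, splitMode, onLeaf);
        partition({ b.x, b.y + b.h / 2, b.w, b.h / 2, b.depth + 1 }, splitMode, onLeaf);
        break;
    case 3:                                                              // binary split into two Nx2N blocks
        partition({ b.x,           b.y, b.w / 2, b.h, b.depth + 1 }, splitMode, onLeaf);
        partition({ b.x + b.w / 2, b.y, b.w / 2, b.h, b.depth + 1 }, splitMode, onLeaf);
        break;
    default:
        onLeaf(b);                                                       // leaf coding block
    }
}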
[0252]
[0253] The number of candidates representing the size of a coding block may be limited to a predetermined number, or the size of a coding block in a predetermined unit may have a fixed value. As an example, the size of the coding block in a sequence or in an image can be limited to 256x256, 128x128 or 32x32. Information indicating the size of the coding block in the sequence or in the image can be signaled through a sequence header or an image header.
[0254]
[0255] As a result of partitioning based on a quad tree and a binary tree, a coding unit can take a square or rectangular shape of an arbitrary size.
[0256]
[0257] A coding block is encoded using at least one of a skip mode, intra prediction, inter prediction or a skip method. Once a coding block is determined, a prediction block can be determined through predictive partitioning of the coding block. The predictive partitioning of the coding block can be performed by a partition mode (Part_mode) that indicates a partition type of the coding block. The size or shape of the prediction block can be determined according to the partition mode of the coding block. For example, the size of a prediction block determined according to the partition mode may be equal to or smaller than the size of the coding block.
[0258]
[0259] Figure 7 is a diagram illustrating a partition mode that can be applied to a coding block when the coding block is encoded by inter prediction.
[0260]
[0261] When a coding block is encoded by inter prediction, one of 8 partition modes can be applied to the coding block, as in the example shown in Figure 4.
[0262]
[0263] When a coding block is encoded by intra prediction, a partition mode PART_2Nx2N or a partition mode PART_NxN can be applied to the coding block.
[0264]
[0265] PART_NxN can be applied when a coding block has a minimum size. Here, the minimum size of the coding block can be predefined in the encoder and the decoder. Or, information about the minimum size of the coding block can be signaled through a bit stream. For example, the minimum size of the coding block can be signaled through a slice header, so that the minimum size of the coding block can be defined per slice.
[0266]
[0267] In general, a prediction block can have a size of 64x64 to 4x4. However, when a coding block is encoded by inter prediction, it can be restricted that the prediction block does not have a size of 4x4, in order to reduce the memory bandwidth when motion compensation is performed.
[0268]
[0269] Figure 8 is a flow chart illustrating processes of obtaining a residual sample according to an embodiment to which the present invention is applied.
[0270]
[0271] First, a residual coefficient of a current block can be obtained (S810). A decoder can obtain the residual coefficient through a coefficient scanning method. For example, the decoder can perform coefficient scanning using a zig-zag scan, a vertical scan or a horizontal scan, and can obtain residual coefficients in the form of a two-dimensional block.
[0272]
[0273] Inverse quantization can be performed on the residual coefficient of the current block (S820).
[0274] An inverse transform is selectively performed depending on whether the inverse transform on the dequantized residual coefficient of the current block should be skipped (S830). Specifically, the decoder can determine whether the inverse transform should be skipped in at least one of the horizontal or vertical directions of the current block. When it is determined that the inverse transform is applied in at least one of the horizontal or vertical directions of the current block, a residual sample of the current block can be obtained by inversely transforming the dequantized residual coefficient of the current block. Here, the inverse transform can be performed using at least one of DCT, DST and KLT.
[0275]
[0276] When the inverse transform is skipped in both the horizontal and vertical directions of the current block, the inverse transform is not performed in the horizontal and vertical directions of the current block. In this case, the residual sample of the current block can be obtained by scaling the dequantized residual coefficient by a predetermined value.
[0277]
[0278] Skipping the inverse transform in the horizontal direction means that the inverse transform is not performed in the horizontal direction, but the inverse transform is performed in the vertical direction. At this time, scaling can be performed in the horizontal direction.
[0279]
[0280] Skipping the inverse transform in the vertical direction means that the inverse transform is not performed in the vertical direction, but the inverse transform is performed in the horizontal direction. At this time, scaling can be performed in the vertical direction.
[0281]
[0282] Whether or not an inverse transform skip technique can be used for the current block can be determined depending on the partition type of the current block. For example, if the current block is generated through binary tree-based partitioning, the inverse transform skip scheme may be restricted for the current block. Therefore, when the current block is generated through binary tree-based partitioning, the residual sample of the current block can be obtained by inversely transforming the current block. In addition, when the current block is generated through binary tree-based partitioning, the encoding/decoding of the information indicating whether the inverse transform is skipped (for example, transform_skip_flag) can be omitted.
[0283]
[0284] Alternatively, when the current block is generated through binary tree-based partitioning, it is possible to limit the inverse transform skip scheme to at least one of the horizontal or vertical directions. Here, the direction in which the inverse transform skip scheme is limited can be determined based on information decoded from the bit stream, or it can be determined adaptively based on at least one of the size of the current block, the shape of the current block, or the intra prediction mode of the current block.
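The residual derivation flow of steps S810 to S830 described above can be summarized by the following C++ sketch. The dequantization and one-dimensional inverse transform routines are placeholder stubs, and the scaling shift used when the transform is skipped in both directions is an assumed value, since the description only states that a predetermined value is used.

#include <vector>

using CoeffBlock = std::vector<std::vector<int>>;

// Stubs standing in for the real routines; only the control flow below follows the text.
static CoeffBlock dequantize(CoeffBlock b, int /*qp*/) { return b; }   // S820 placeholder
static CoeffBlock inverseTransformRows(CoeffBlock b)   { return b; }   // horizontal 1-D transform placeholder
static CoeffBlock inverseTransformCols(CoeffBlock b)   { return b; }   // vertical 1-D transform placeholder

// S810-S830: parse -> dequantize -> either scale (skip in both directions) or
// inverse-transform the direction(s) that are not skipped.
CoeffBlock deriveResiduals(const CoeffBlock& parsedCoeffs, int qp, bool skipHor, bool skipVer)
{
    CoeffBlock r = dequantize(parsedCoeffs, qp);
    if (skipHor && skipVer) {
        for (auto& row : r)
            for (auto& v : row) v <<= 1;   // scale by a predetermined value (assumed shift of 1)
        return r;
    }
    if (!skipVer) r = inverseTransformCols(r);
    if (!skipHor) r = inverseTransformRows(r);
    return r;
}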
That is, when the current block is 2NxN, the inverse transform is performed in the horizontal direction of the current block, and the inverse transform can be performed selectively in the vertical direction. [0287] [0288] On the other hand, when the current block is a non-square block whose height is greater than its width, the inverse transform skip scheme can be allowed only in the horizontal direction and restricted in the vertical direction. That is, when the current block is Nx2N, the inverse transform is performed in the vertical direction of the current block, and the inverse transform can be performed selectively in the horizontal direction. [0289] [0290] In contrast to the previous example, when the current block is a non-square block whose width is greater than its height, the inverse transform skip scheme may be allowed only in the horizontal direction, and when the current block is a non-square block whose height is greater than its width, the inverse transform skip scheme may be allowed only in the vertical direction. [0291] Information indicating whether or not the inverse transform with respect to the horizontal direction should be skipped, or information indicating whether the inverse transform with respect to the vertical direction should be skipped, can be signaled through a bit stream. For example, the information indicating whether or not the inverse transform in the horizontal direction should be skipped is a 1-bit flag, 'hor_transform_skip_flag', and the information indicating whether the inverse transform in the vertical direction should be skipped is a 1-bit flag, 'ver_transform_skip_flag'. The encoder can encode at least one of 'hor_transform_skip_flag' or 'ver_transform_skip_flag' according to the shape of the current block. In addition, the decoder can determine whether or not the inverse transform in the horizontal direction or in the vertical direction is skipped by using at least one of 'hor_transform_skip_flag' or 'ver_transform_skip_flag'. [0292] [0293] It can be configured that the inverse transform is skipped for either direction of the current block depending on the partition type of the current block. For example, if the current block is generated through a binary tree-based partition, the inverse transform in the horizontal or vertical direction can be skipped. That is, if the current block is generated by a binary tree-based partition, it can be determined that the inverse transform for the current block is skipped in at least one of the horizontal or vertical directions without encoding/decoding the information (for example, transform_skip_flag, hor_transform_skip_flag, ver_transform_skip_flag) indicating whether or not to skip the inverse transform of the current block. [0294] [0295] Figure 9 is a flow chart illustrating an interprediction method according to an embodiment to which the present invention is applied. [0296] [0297] Referring to Figure 9, the motion information of a current block is determined S910. The motion information of the current block may include at least one of a motion vector related to the current block, a reference image index of the current block, or an interprediction direction of the current block. [0298] [0299] The motion information of the current block can be obtained based on at least one of information signaled through a bit stream or motion information of a neighboring block adjacent to the current block.
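For illustration, the motion information listed above can be collected into a small structure; a sketch with assumed field names (the embodiments do not prescribe any particular representation):

```python
# Sketch of the motion information of a block (field names are assumptions):
# a motion vector, a reference image index and an interprediction direction
# per reference image list L0/L1.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MotionInfo:
    mv_l0: Optional[Tuple[int, int]] = None   # motion vector for list L0 (x, y)
    mv_l1: Optional[Tuple[int, int]] = None   # motion vector for list L1 (x, y)
    ref_idx_l0: int = -1                      # reference image index in list L0
    ref_idx_l1: int = -1                      # reference image index in list L1

    @property
    def inter_direction(self) -> int:
        """1: L0 only, 2: L1 only, 3: bidirectional."""
        return (1 if self.ref_idx_l0 >= 0 else 0) + (2 if self.ref_idx_l1 >= 0 else 0)
```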
[0300] [0301] Figure 10 is a diagram illustrating the processes of deriving motion information of a current block when a merge mode is applied to the current block. [0302] [0303] If the merge mode is applied to the current block, a spatial merge candidate can be derived from a spatial neighboring block of the current block S1010. The spatial neighboring block may comprise at least one of the blocks adjacent to the left, the top or a corner (for example, at least one of a top left corner, a top right corner or a bottom left corner) of the current block. [0304] [0305] The motion information of the spatial merge candidate can be set to be the same as the motion information of the spatial neighboring block. [0306] [0307] A temporal merge candidate can be derived from a temporal neighboring block of the current block S1020. The temporal neighboring block may mean a block included in a collocated image. The collocated image has a picture order count (POC) different from that of the current image including the current block. The collocated image can be determined as an image having a predefined index in a reference image list, or it can be determined by an index signaled from a bit stream. The temporal neighboring block can be determined to be a block having the same position and coordinates as the current block in the collocated image, or a block adjacent to that collocated block. For example, at least one of a block including the center coordinates of the collocated block or a block adjacent to the lower left boundary of the collocated block can be determined as the temporal neighboring block. [0308] [0309] The motion information of the temporal merge candidate can be determined based on the motion information of the temporal neighboring block. For example, a motion vector of the temporal merge candidate can be determined based on a motion vector of the temporal neighboring block. In addition, an interprediction direction of the temporal merge candidate can be set to be the same as the interprediction direction of the temporal neighboring block. However, a reference image index of the temporal merge candidate may have a fixed value. For example, the reference image index of the temporal merge candidate can be set to '0'. [0310] [0311] Thereafter, a merge candidate list including the spatial merge candidate and the temporal merge candidate can be generated S1030. If the number of merge candidates included in the merge candidate list is less than the maximum number of merge candidates, a combined merge candidate obtained by combining two or more merge candidates may be included in the merge candidate list. [0312] [0313] When the merge candidate list is generated, at least one of the merge candidates included in the merge candidate list may be specified based on a merge candidate index S1040. [0314] [0315] The motion information of the current block can be set to be equal to the motion information of the merge candidate specified by the merge candidate index S1050. For example, when a spatial merge candidate is selected by the merge candidate index, the motion information of the current block can be set to be the same as the motion information of the spatial neighboring block. Alternatively, when a temporal merge candidate is selected by the merge candidate index, the motion information of the current block can be set to be the same as the motion information of the temporal neighboring block.
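A sketch of the merge candidate list construction and selection of steps S1010 to S1050 (the candidate ordering, the combined-candidate rule, the list size and all names are simplifying assumptions; only the general flow is illustrated):

```python
# Sketch of steps S1010-S1050 (assumed names, simplified rules). Each
# candidate is a dict of motion information, e.g. {'mv_l0': (3, -1), 'ref_idx': 0}.
def build_merge_candidate_list(spatial_neighbours, temporal_neighbour,
                               max_candidates=5):
    candidates = []
    # S1010: spatial merge candidates inherit the neighbour's motion info.
    for neighbour in spatial_neighbours:
        if neighbour is not None and neighbour not in candidates:
            candidates.append(neighbour)
    # S1020: temporal merge candidate; its reference image index is fixed to 0.
    if temporal_neighbour is not None:
        candidates.append(dict(temporal_neighbour, ref_idx=0))
    # S1030: if the list is still short, add a combined (bi-predictive)
    # candidate built from the first two candidates (simplified rule).
    if len(candidates) < max_candidates and len(candidates) >= 2:
        combined = {"mv_l0": candidates[0].get("mv_l0"),
                    "mv_l1": candidates[1].get("mv_l1"),
                    "ref_idx": candidates[0].get("ref_idx", 0)}
        candidates.append(combined)
    return candidates[:max_candidates]

def motion_info_from_merge_index(candidates, merge_idx):
    # S1040/S1050: the signalled merge index selects the candidate whose
    # motion information the current block reuses.
    return candidates[merge_idx]
```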
[0316] [0317] Figure 11 is a diagram illustrating processes of deriving motion information of a current block when an AMVP mode is applied to the current block. [0318] [0319] When the AMVP mode is applied to the current block, at least one of the interprediction direction of the current block or a reference image index can be decoded from a bit stream S1110. That is, when the AMVP mode is applied, at least one of the interprediction direction or the reference image index of the current block can be determined based on the information encoded in the bit stream. [0320] [0321] A spatial motion vector candidate can be determined based on a motion vector of a spatial neighboring block of the current block S1120. The spatial motion vector candidate may include at least one of a first spatial motion vector candidate derived from an upper neighboring block of the current block and a second spatial motion vector candidate derived from a left neighboring block of the current block. Here, the upper neighboring block may include at least one of the blocks adjacent to the top or the top right corner of the current block, and the left neighboring block of the current block may include at least one of the blocks adjacent to the left or the bottom left corner of the current block. A block adjacent to the top left corner of the current block can be treated either as the upper neighboring block or as the left neighboring block. [0322] [0323] When the reference images of the current block and of the spatial neighboring block are different from each other, it is also possible to obtain the spatial motion vector candidate by scaling the motion vector of the spatial neighboring block. [0324] [0325] A temporal motion vector candidate can be determined based on a motion vector of a temporal neighboring block of the current block S1130. When the reference images of the current block and of the temporal neighboring block are different from each other, it is also possible to obtain the temporal motion vector candidate by scaling the motion vector of the temporal neighboring block. [0326] [0327] A motion vector candidate list including the spatial motion vector candidate and the temporal motion vector candidate can be generated S1140. [0328] When the motion vector candidate list is generated, at least one of the motion vector candidates included in the motion vector candidate list can be specified based on information signaled through the bit stream S1150. [0329] [0330] The motion vector candidate specified by that information can be set as a motion vector prediction value of the current block, and a motion vector difference value can be added to the motion vector prediction value to obtain a motion vector of the current block S1160. At this time, the motion vector difference value can be parsed from the bit stream. [0331] [0332] When the motion information of the current block is obtained, motion compensation for the current block can be performed based on the obtained motion information S920. More specifically, motion compensation for the current block can be performed depending on the interprediction direction, the reference image index and the motion vector of the current block. [0333] [0334] The interprediction direction may indicate N directions. Here, N is a natural number and can be 1, 2, 3 or more.
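For illustration, a sketch of the AMVP derivation of Figure 11 (all names, the scaling rule, the pruning and the candidate count are assumptions; the embodiments do not prescribe this implementation):

```python
# Sketch of steps S1120-S1160 (assumed names, simplified scaling/pruning):
# derive a motion vector predictor list and reconstruct the motion vector.
def scaled_mv(mv, neighbour_poc_dist, current_poc_dist):
    """Scale a neighbour's motion vector when reference distances differ."""
    if neighbour_poc_dist == 0:
        return mv
    factor = current_poc_dist / neighbour_poc_dist
    return (round(mv[0] * factor), round(mv[1] * factor))

def build_mvp_list(spatial_candidates, temporal_candidate, max_candidates=2):
    """Each candidate: (mv, neighbour_poc_dist, current_poc_dist)."""
    mvp_list = []
    for mv, n_dist, c_dist in spatial_candidates:        # S1120
        cand = scaled_mv(mv, n_dist, c_dist)
        if cand not in mvp_list:
            mvp_list.append(cand)
    if temporal_candidate is not None and len(mvp_list) < max_candidates:
        mv, n_dist, c_dist = temporal_candidate           # S1130
        mvp_list.append(scaled_mv(mv, n_dist, c_dist))
    return mvp_list[:max_candidates]                      # S1140

def reconstruct_mv(mvp_list, mvp_idx, mvd):
    """S1150/S1160: a signalled index selects the predictor, the MVD is added."""
    mvp = mvp_list[mvp_idx]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Example: two spatial candidates, no temporal candidate, MVD parsed as (2, -1).
mvps = build_mvp_list([((4, 0), 2, 2), ((8, 2), 4, 2)], None)
mv = reconstruct_mv(mvps, mvp_idx=0, mvd=(2, -1))         # -> (6, -1)
```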
If the interprediction direction indicates N directions, it means that the interprediction of the current block is performed based on N reference images or N reference blocks. For example, when the interprediction direction of the current block indicates a single direction, the interprediction of the current block can be performed based on one reference image. On the other hand, when the interprediction of the current block indicates a double direction, the interprediction of the current block can be performed using two reference images or two reference blocks. [0335] [0336] It is also possible to determine whether multidirectional prediction is allowed for the current block based on at least one of a size or a shape of the current block. For example, when a coding unit has a square shape, multidirectional prediction is allowed for its encoding/decoding. On the other hand, when the coding unit has a non-square shape, only unidirectional prediction is allowed for its encoding/decoding. Contrary to the previous cases, it is also possible to establish that multidirectional prediction is allowed for encoding/decoding the coding unit when it is non-square, and that only unidirectional prediction is allowed for encoding/decoding the coding unit when it has a square shape. Alternatively, it is also possible to establish that multidirectional prediction is not allowed for encoding/decoding a prediction unit when the prediction unit has a non-square shape of 4x8, 8x4 or the like. [0337] [0338] The reference image index can specify a reference image to be used for the interprediction of the current block. Specifically, the reference image index can specify any of the reference images included in the reference image list. For example, when the interprediction direction of the current block is bidirectional, the reference image (reference image L0) included in the reference image list L0 is specified by a reference image index L0, and the reference image (reference image L1) included in the reference image list L1 is specified by a reference image index L1. [0339] [0340] Alternatively, one reference image can be included in two or more reference image lists. Consequently, even if the reference image index of the reference image included in the reference image list L0 and the reference image index of the reference image included in the reference image list L1 are different, the temporal orders (image order count, POC) of the two reference images may be the same. [0341] [0342] The motion vector can be used to specify a position of a reference block, in the reference image, corresponding to the prediction block of the current block. The interprediction of the current block can be performed depending on the reference block, specified by the motion vector, in the reference image. For example, an integer pixel included in the reference block, or a non-integer pixel generated by interpolating integer pixels, can be generated as a prediction sample of the current block. It is also possible that reference blocks specified by different motion vectors are included in the same reference image. For example, when the reference image selected from the reference image list L0 and the reference image selected from the reference image list L1 are the same, the reference block specified by the motion vector L0 and the reference block specified by the motion vector L1 can be included in the same reference image.
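For illustration, a sketch of how a motion vector selects prediction samples from a reference image, including the non-integer case mentioned above (quarter-pel precision and bilinear interpolation are assumptions made only for this sketch; an actual codec would use its own interpolation filters):

```python
import math

def sample(ref, x, y):
    """Clamped access to a reference image stored as ref[y][x]."""
    h, w = len(ref), len(ref[0])
    return ref[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

def interpolate(ref, fx, fy):
    """Bilinear interpolation of the reference image at fractional (fx, fy)."""
    x0, y0 = math.floor(fx), math.floor(fy)
    dx, dy = fx - x0, fy - y0
    top = (1 - dx) * sample(ref, x0, y0) + dx * sample(ref, x0 + 1, y0)
    bot = (1 - dx) * sample(ref, x0, y0 + 1) + dx * sample(ref, x0 + 1, y0 + 1)
    return (1 - dy) * top + dy * bot

def motion_compensate(ref, block_x, block_y, width, height, mv_quarter_pel):
    """Prediction block for a block at (block_x, block_y) with a 1/4-pel MV."""
    mvx, mvy = mv_quarter_pel[0] / 4.0, mv_quarter_pel[1] / 4.0
    return [[interpolate(ref, block_x + x + mvx, block_y + y + mvy)
             for x in range(width)] for y in range(height)]
```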
[0343] [0344] As described above, when the interprediction direction of the current block indicates two or more directions, motion compensation for the current block can be performed based on two or more reference images or two or more reference blocks. [0345] [0346] For example, when the current block is coded with bidirectional prediction, the prediction block of the current block can be obtained based on two reference blocks obtained from two reference images. In addition, when the current block is coded with bidirectional prediction, a residual block indicating the difference between an original block and the prediction block obtained based on the two reference blocks can be encoded/decoded. [0347] [0348] When two or more reference images are used, motion compensation for the current block can be performed by applying the same or different weights to the respective reference images. Hereinafter, a method of performing a weighted prediction on the current block when the interprediction direction of the current block indicates two or more directions will be described in detail in the following embodiments. For convenience of explanation, it is assumed that the interprediction direction of the current block is bidirectional. However, even when the interprediction direction of the current block indicates three or more directions, the following embodiments can be applied in a similar manner. In addition, motion compensation for the current block using two prediction images will be called the bidirectional prediction method or the bidirectional prediction encoding/decoding method. [0349] [0350] When bidirectional prediction is applied to the current block, the reference images used for the bidirectional prediction of the current block can include an image whose temporal order (image order count, POC) is earlier than the current image, an image whose temporal order is later than the current image, or the current image itself. For example, one of the two reference images may be an image whose temporal order is prior to the current image, and the other may be an image whose temporal order is subsequent to the current image. Alternatively, one of the two reference images may be the current image, and the other may be an image whose temporal order is prior to or subsequent to the current image. Alternatively, both reference images may have temporal orders prior to the current image, or both may have temporal orders subsequent to the current image. Alternatively, both reference images may be the current image. [0351] [0352] Two prediction blocks can be generated, one from each of the two reference image lists. For example, a prediction block based on the reference image L0 can be generated based on the motion vector L0, and a prediction block based on the reference image L1 can be generated based on the motion vector L1. It is also possible that the prediction block generated by the motion vector L0 and the prediction block generated by the motion vector L1 are generated based on the same reference image. [0353] [0354] A prediction block of the current block can be obtained based on an average value of the prediction blocks generated from the two reference images. For example, Equation 1 shows how to obtain the prediction block of the current block based on the average value of a plurality of prediction blocks.
[0355] [0356] [Equation 1] [0357] P(x) = 1/2 * P0(x) + 1/2 * P1(x) [0358] [0359] In Equation 1, P(x) indicates a final prediction sample of the current block or a bidirectional prediction sample, and PN(x) indicates a sample value of a prediction block LN generated based on a reference image LN. For example, P0(x) can mean a prediction sample of the prediction block generated based on the reference image L0, and P1(x) can mean a prediction sample of the prediction block generated based on the reference image L1. That is, according to Equation 1, the final prediction block of the current block can be obtained based on the weighted sum of the plurality of prediction blocks generated based on the plurality of reference images. At this time, a weight of a fixed value predefined in the encoder/decoder can be assigned to each prediction block. [0360] [0361] According to an embodiment of the present invention, the final prediction block of the current block is obtained based on the weighted sum of a plurality of prediction blocks, and the weight assigned to each prediction block can be determined in a variable/adaptive manner. For example, when the two reference images or the two prediction blocks have different brightness, it is more effective to perform the bidirectional prediction for the current block by applying different weights to each of the prediction blocks than to perform the bidirectional prediction for the current block by averaging the prediction blocks. Hereinafter, for convenience of explanation, the bidirectional prediction method in which the weight assigned to each of the prediction blocks is determined in a variable/adaptive manner will be called 'bidirectional weighted prediction'. [0362] [0363] It is also possible to determine whether or not bidirectional weighted prediction is allowed for the current block based on at least one of a size or a shape of the current block. For example, if the coding unit has a square shape, it is allowed to be encoded/decoded using bidirectional weighted prediction, whereas, if the coding unit has a non-square shape, it is not allowed to be encoded/decoded using bidirectional weighted prediction. Unlike the previous cases, it is also possible to establish that the coding block is allowed to be encoded/decoded using bidirectional weighted prediction when it has a non-square shape, and is not allowed to be encoded/decoded using bidirectional weighted prediction when it has a square shape. Alternatively, it is also possible to establish that bidirectional weighted prediction is not allowed for encoding/decoding the prediction unit when the prediction unit is a non-square partition having a size of 4x8, 8x4 or the like. [0364] [0365] Figure 12 is a diagram of a bidirectional weighted prediction method, in accordance with an embodiment of the present invention. [0366] [0367] A weighted prediction parameter for the current block can be determined in order to perform the bidirectional weighted prediction S1210. The weighted prediction parameter can be used to determine the weights that will be applied to both reference images. For example, as shown in Figure 13, a weight of 1-w can be applied to the prediction block generated based on the reference image L0, and a weight of w can be applied to the prediction block generated based on the reference image L1.
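For reference, a one-function sketch of the fixed-weight combination of Equation 1 (representing prediction blocks as 2-D lists of samples is an assumption of this sketch); the variable-weight generalization of Figures 12 and 13 is described next:

```python
# Sketch of Equation 1: the bi-prediction block is the sample-wise average of
# the two prediction blocks P0 and P1 generated from reference images L0 and L1.
def average_bi_prediction(p0, p1):
    return [[0.5 * a + 0.5 * b for a, b in zip(row0, row1)]
            for row0, row1 in zip(p0, p1)]

# Example with 2x2 blocks:
p = average_bi_prediction([[100, 102], [98, 96]], [[104, 100], [90, 100]])
# -> [[102.0, 101.0], [94.0, 98.0]]
```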
Based on the weighted prediction parameter, the weight to be applied to each prediction block is determined S1220, and a weighted sum operation of a plurality of prediction blocks is performed based on the determined weights to generate the final prediction block of the current block S1230. For example, the final prediction block of the current block can be generated based on the following Equation 2. [0368] [0369] [Equation 2] [0370] P(x) = (1-w) * P0(x) + w * P1(x) [0371] [0372] In Equation 2, w represents the weighted prediction parameter. [0373] [0374] As shown in Equation 2, the final prediction block P(x) of the current block can be obtained by assigning the weight 1-w to the prediction block P0 and assigning the weight w to the prediction block P1. [0375] It is also possible to assign the weight w to the prediction block P0 and assign the weight 1-w to the prediction block P1, unlike what is shown in Equation 2. [0376] [0377] The weighted prediction parameter can be determined based on the difference in brightness between the reference images, or it can be determined based on the distances between the current image and the reference images (i.e., the POC differences). Alternatively, it is also possible to determine the weighted prediction parameter according to the size or shape of the current block. [0378] [0379] The weighted prediction parameter can be determined in units of a block (for example, a coding tree unit, a coding unit, a prediction unit or a transform unit), or it can be determined in units of a slice or an image. [0380] [0381] At this time, the weighted prediction parameter can be determined based on predefined candidate weighted prediction parameters. As an example, it can be determined that the weighted prediction parameter is one of predefined values such as -1/4, 1/4, 3/8, 1/2, 5/8, 3/4 or 5/4. [0382] [0383] Alternatively, after determining a weighted prediction parameter set for the current block, it is also possible to determine the weighted prediction parameter from at least one of the candidate weighted prediction parameters included in the determined weighted prediction parameter set. The weighted prediction parameter set can be determined in units of a block (for example, a coding tree unit, a coding unit, a prediction unit or a transform unit), or it can be determined in units of a slice or an image. [0384] [0385] For example, if one of the weighted prediction parameter sets w0 and w1 is selected, at least one of the candidate weighted prediction parameters included in the selected weighted prediction parameter set can be determined as the weighted prediction parameter for the current block. For example, assume that w0 = {-1/4, 1/4, 3/8, 1/2, 5/8, 3/4, 5/4} and w1 = {-3/8, 1/4, 3/8, 1/2, 5/8, 3/4}. When the weighted prediction parameter set w0 is selected, the weighted prediction parameter w of the current block can be determined as one of the candidate weighted prediction parameters -1/4, 1/4, 3/8, 1/2, 5/8, 3/4 and 5/4 included in w0. [0386] [0387] The weighted prediction parameter set available for the current block can be determined according to a temporal order or a temporal direction of the reference images used for the bidirectional prediction. The temporal order can indicate an encoding/decoding order between images, or it can indicate an output order (for example, POC) of the images.
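A sketch of steps S1220 and S1230 based on Equation 2, using the example candidate values given above (everything else, including the names, is an assumption); note that Equation 1 corresponds to the special case w = 1/2:

```python
# Sketch of S1220/S1230 (assumed names): apply Equation 2 sample-wise,
# selecting w from the candidate weighted prediction parameters.
W0_CANDIDATES = [-1/4, 1/4, 3/8, 1/2, 5/8, 3/4, 5/4]

def weighted_bi_prediction(p0, p1, w):
    """Final prediction block per Equation 2: (1-w)*P0(x) + w*P1(x)."""
    return [[(1 - w) * a + w * b for a, b in zip(r0, r1)]
            for r0, r1 in zip(p0, p1)]

def prediction_from_index(p0, p1, weight_index, candidates=W0_CANDIDATES):
    """A signalled index selects one candidate weighted prediction parameter."""
    return weighted_bi_prediction(p0, p1, candidates[weight_index])

# Example: w = 5/8 puts more weight on the L1 prediction block.
block = prediction_from_index([[100, 100]], [[116, 80]], weight_index=4)
# -> [[110.0, 87.5]]
```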
In addition, the temporal direction can indicate whether the temporal order of the reference image is before or after the current image. [0388] [0389] As an example, the weighted prediction parameter set available for the current image can be determined depending on whether the two reference images used for bidirectional prediction have the same temporal order. For example, the weighted prediction parameter available for the current block can be determined in a variable way depending on whether the reference image L0 and the reference image L1 are the same image (i.e., the temporal orders of the images are the same) or whether the reference image L0 and the reference image L1 are different from each other (i.e., the temporal orders of the images are different). [0390] [0391] Different weighted prediction parameter sets may mean that at least one of an absolute value, a sign or a number of the weighted prediction parameters included in each weighted prediction parameter set is different. For example, when the temporal directions of the reference image L0 and the reference image L1 are the same, the weighted prediction parameter set w0 = {-1/4, 1/4, 3/8, 1/2, 5/8, 5/4} can be used, and when the temporal directions of the reference image L0 and the reference image L1 are different, the weighted prediction parameter set w1 = {-3/8, -1/4, 1/4, 3/8, 1/2, 5/8, 3/4} can be used. [0392] [0393] As an example, the weighted prediction parameter set available for the current image can be determined depending on whether or not the temporal directions of the two reference images used in bidirectional prediction are the same. For example, the weighted prediction parameter set available for the current block can be determined differently when the temporal directions of the two reference images are the same and when the temporal directions of the two reference images are different. Specifically, the weighted prediction parameter of the current block can be determined differently depending on whether both the reference image L0 and the reference image L1 are prior to the current image, whether or not the reference image L0 or the reference image L1 is later than the current image, or whether or not the temporal directions of the reference image L0 and the reference image L1 are different. [0394] [0395] The number of available candidate weighted prediction parameters or the number of available weighted prediction parameter sets can be set differently for each block, each slice or each image. For example, the number of available candidate weighted prediction parameters or the number of available weighted prediction parameter sets can be signaled in units of a slice. Accordingly, the number of available candidate weighted prediction parameters or the number of available weighted prediction parameter sets may be different for each slice. [0396] [0397] The weighted prediction parameter can be derived from a neighboring block adjacent to the current block. Here, the neighboring block adjacent to the current block may include at least one of a spatial neighboring block or a temporal neighboring block of the current block. [0398] [0399] As an example, the weighted prediction parameter of the current block can be set to a minimum value or a maximum value among the weighted prediction parameters of the neighboring blocks adjacent to the current block, or it can be set to the average value of the weighted prediction parameters of the neighboring blocks.
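A small sketch of the neighbour-based derivation just mentioned (the choice among minimum, maximum and average is a parameter here; the names and the fallback value are assumptions):

```python
# Sketch (assumed names): derive the weighted prediction parameter of the
# current block from the parameters of its spatial/temporal neighbouring blocks.
def weight_from_neighbours(neighbour_weights, rule="average"):
    available = [w for w in neighbour_weights if w is not None]
    if not available:
        return 1/2                            # fall back to the equal-weight case
    if rule == "min":
        return min(available)
    if rule == "max":
        return max(available)
    return sum(available) / len(available)    # average of neighbouring weights

# Example: left, top and temporal neighbours carry 5/8, 1/2 and no parameter.
w = weight_from_neighbours([5/8, 1/2, None], rule="average")   # -> 0.5625
```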
[0400] [0401] As an example, the weighted prediction parameter of the current block can be derived from a neighboring block located at a predetermined position among the neighboring blocks adjacent to the current block. Here, the predetermined position can be determined in a variable or a fixed way. Specifically, the position of the neighboring block can be determined by a size of the current block (for example, a coding unit, a prediction unit or a transform unit), a position of the current block within the coding tree unit, a shape of the current block (for example, a partition type of the current block), or a partition index of the current block. Alternatively, the position of the neighboring block can be predefined in the encoder/decoder and determined in a fixed manner. [0402] [0403] As an example, the weighted prediction parameter of the current block can be derived from a neighboring block, among the neighboring blocks adjacent to the current block, to which bidirectional weighted prediction is applied. Specifically, the weighted prediction parameter of the current block can be derived from the weighted prediction parameter of the first detected neighboring block to which bidirectional weighted prediction is applied when the neighboring blocks adjacent to the current block are scanned in a predetermined order. Figure 14 is a diagram illustrating a scanning order between neighboring blocks. In Figure 14, the scanning is performed in the order of a left neighboring block, an upper neighboring block, an upper right neighboring block, a lower left neighboring block and an upper left neighboring block, but the present invention is not limited to the illustrated example. When the scanning is performed in the predefined order, the weighted prediction parameter of the first detected neighboring block to which bidirectional weighted prediction is applied can be used as the weighted prediction parameter of the current block. [0404] [0405] Alternatively, when the scanning is performed in the predefined order, it is also possible to set the weighted prediction parameter of the first detected neighboring block to which bidirectional weighted prediction is applied as the prediction value of the weighted prediction parameter of the current block. In this case, the weighted prediction parameter of the current block can be obtained using the prediction value of the weighted prediction parameter and the residual value of the weighted prediction parameter. [0406] [0407] As an example, it is also possible to derive the weighted prediction parameter of the current block from the spatial or temporal neighboring block used to merge the motion information of the current block, or from the spatial or temporal neighboring block used to derive the motion vector prediction value of the current block. [0408] [0409] It is also possible to signal information for determining the weighted prediction parameter through a bit stream. For example, the weighted prediction parameter of the current block can be determined based on at least one of information indicating a value of the weighted prediction parameter, index information specifying one of the candidate weighted prediction parameters, or set index information specifying one of the weighted prediction parameter sets. [0410] [0411] When binarizing and encoding the weighted prediction parameters, the smallest binary code word can be assigned to the weighted prediction parameter that statistically has the highest frequency of use.
For example, truncated unary binarization can be performed on the weighted prediction parameter as shown in Table 1 below. Table 1 is an example for the case where cMax is 6. [0412] [0413] [Table 1] [0414] [0415] [0416] [0417] [0418] The truncated unary binarization method shown in Table 1 is basically the same as a unary binarization method, except that the maximum value (cMax) of the input is received in advance and the conversion is performed accordingly. Table 2 shows truncated unary binarization with a cMax of 13. [0419] [0420] [Table 2] [0421] [0422] [0423] [0424] [0425] During binarization of the weighted prediction parameter, it is also possible to use different binary code words depending on whether or not the temporal directions of the reference images used for bidirectional prediction are the same. For example, Table 3 illustrates binary code words according to whether or not the temporal directions of the reference image L0 and the reference image L1 are the same. [0426] [Table 3] [0427] [0428] [0429] [0430] [0431] It is also possible to determine the weighted prediction parameter of the current block according to a temporal order difference between the current image and the reference image. Here, the temporal order difference may indicate the encoding/decoding order difference between images or the output order difference between images (for example, a POC difference value). For example, the weighted prediction parameter of the current block can be determined based on at least one of the POC difference value between the current image and the reference image L0 (hereinafter referred to as the first reference distance) and the POC difference value between the current image and the reference image L1 (hereinafter referred to as the second reference distance). [0432] [0433] Specifically, the weighted prediction parameter of the current block can be determined based on a relationship between the first reference distance and the second reference distance. When the first reference distance is w and the second reference distance is h, w/(w+h) can be used as the weighted prediction parameter of the current block. For example, when the first reference distance and the second reference distance are the same, the weighted prediction parameter of the current block can be determined as 1/2. Also, when the first reference distance is 1 and the second reference distance is 3, the weighted prediction parameter of the current block can be determined as 1/4. [0434] [0435] Alternatively, when the first reference distance is w and the second reference distance is h, it is also possible to use, as the weighted prediction parameter of the current block, the candidate weighted prediction parameter whose value is most similar to w/(w+h) among the candidate weighted prediction parameters. [0436] [0437] Alternatively, it is also possible to binarize the weighted prediction parameter of the current block taking into account the first reference distance and the second reference distance. Table 4 shows binary code words based on the first reference distance and the second reference distance. [0438] [0439] [Table 4] [0440] [0441] [0442] [0443] [0444] In the example shown in Table 4, when the first reference distance and the second reference distance are equal, the probability that the weighted prediction parameter is set to 1/2 is high. As a result, the smallest code word can be assigned to 1/2 when the first reference distance and the second reference distance are the same.
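A sketch of truncated unary binarization as referred to above (only the codeword construction is shown, under the usual definition of truncated unary coding; the mapping between parameter values and indices in Tables 1 to 4 is not reproduced here, since the table contents are not available):

```python
# Sketch: truncated unary binarization of a non-negative index with a known
# maximum value cMax. Identical to unary coding except that the terminating
# '0' is dropped when the index equals cMax.
def truncated_unary(index, c_max):
    if not 0 <= index <= c_max:
        raise ValueError("index out of range")
    if index < c_max:
        return "1" * index + "0"
    return "1" * c_max            # the codeword for cMax has no trailing '0'

# Example with cMax = 6, as in Table 1: indices 0..6 map to
# '0', '10', '110', '1110', '11110', '111110', '111111'.
codewords = [truncated_unary(i, 6) for i in range(7)]
```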
[0445] [0446] When the first reference distance and the second reference distance are different, the smallest binary code word can be assigned to the weighted prediction parameter that is statistically used most frequently. For example, when the first reference distance is greater than the second reference distance, the probability that a higher weight is assigned to the reference image L1 is high. Therefore, the smallest binary code word can be assigned to a weighted prediction parameter greater than 1/2. On the other hand, when the first reference distance is smaller than the second reference distance, the probability that a higher weight is assigned to the reference image L0 is high. Therefore, the smallest binary code word can be assigned to a weighted prediction parameter less than 1/2. [0447] [0448] In contrast to the example shown in Table 4, it is also possible to assign the smallest binary code word to a weighted prediction parameter less than 1/2 when the first reference distance is greater than the second reference distance, and to assign the smallest binary code word to a weighted prediction parameter greater than 1/2 when the first reference distance is less than the second reference distance. [0449] [0450] Although the embodiments described above have been described on the basis of a series of stages or flowcharts, they do not limit the time-series order of the invention, and the stages can be performed simultaneously or in different orders as necessary. In addition, each of the components (for example, units, modules, etc.) constituting the block diagrams in the embodiments described above can be implemented by a hardware device or a software component, or a plurality of components can be combined and implemented by a single hardware device or software component. The embodiments described above can be implemented in the form of program instructions that can be executed through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may include one or a combination of program commands, data files, data structures and the like. Examples of computer-readable media include magnetic media such as hard disks, floppy disks and magnetic tapes, optical recording media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, flash memory, etc. The hardware device can be configured to operate as one or more software modules in order to perform the process according to the present invention, and vice versa. [0451] [0452] Industrial applicability [0453] [0454] The present invention can be applied to electronic devices that can encode/decode a video.
Claims:
Claims (9) [1] 1. A method for decoding a video, the method comprising: obtaining a first motion vector and a second motion vector of a current block; obtaining a first prediction block based on the first motion vector and a first reference image, and a second prediction block based on the second motion vector and a second reference image; obtaining a value of a weighted prediction parameter of the current block; determining, based on the value of the weighted prediction parameter, weights that are applied to the first prediction block and the second prediction block; and obtaining, based on a weighted sum of the first prediction block and the second prediction block, a third prediction block of the current block; wherein: the value of the weighted prediction parameter is obtained based on index information signaled by a data stream, the index information specifies one of candidate weighted prediction parameter values; and a length of the index information is determined by the temporal directions of the first reference image and the second reference image. [2] 2. The method according to claim 1, wherein the index information is binarized with a truncated unary binarization. [3] 3. The method of claim 1, further comprising: determining a weighted prediction parameter value set of the current block among a plurality of weighted prediction parameter value sets, wherein a number of candidate weighted prediction parameter values included in the weighted prediction parameter value set is different from that of another weighted prediction parameter value set; and wherein one of the candidate weighted prediction parameter values included in the weighted prediction parameter value set is determined as the weighted prediction parameter value of the current block. [4] 4. The method of claim 3, wherein the weighted prediction parameter value set is determined by the temporal directions of the first reference image and the second reference image. [5] 5. A method for encoding a video, the method comprising: obtaining a first motion vector and a second motion vector of a current block; obtaining a first prediction block based on the first motion vector and a first reference image, and a second prediction block based on the second motion vector and a second reference image; determining, based on a value of a weighted prediction parameter of the current block, weights that are applied to the first prediction block and the second prediction block; generating, based on a weighted sum of the first prediction block and the second prediction block, a third prediction block of the current block; and encoding index information specifying the value of the weighted prediction parameter among a plurality of candidate weighted prediction parameter values, wherein a length of the index information is determined by the temporal directions of the first reference image and the second reference image. [6] 6. The method of claim 5, wherein the index information is binarized with a truncated unary binarization. [7] 7. The method of claim 5, wherein the length of the index information is determined based on whether the temporal directions are equal. [8] 8.
The method of claim 5, wherein the method comprises determining a weighted prediction parameter value set of the current block among a plurality of weighted prediction parameter value sets, wherein a number of candidate weighted prediction parameter values included in the weighted prediction parameter value set is different from that of another weighted prediction parameter value set; and wherein one of the candidate weighted prediction parameter values included in the weighted prediction parameter value set is determined as the weighted prediction parameter value of the current block. [9] 9. An apparatus for decoding a video, the apparatus comprising: a decoding unit for obtaining a value of a weighted prediction parameter of a current block; and a prediction unit for obtaining a first motion vector and a second motion vector of the current block, obtaining a first prediction block based on the first motion vector and a first reference image and a second prediction block based on the second motion vector and a second reference image, determining, based on the value of the weighted prediction parameter, the weights that are applied to the first prediction block generated based on the first reference image and to the second prediction block, and generating a third prediction block of the current block based on a weighted sum of the first prediction block and the second prediction block, wherein: the value of the weighted prediction parameter is obtained based on index information signaled by a data stream, the index information specifies one of candidate weighted prediction parameter values; and a length of the index information is determined by the temporal directions of the first reference image and the second reference image.
Patent family:
Publication number | Publication date
EP3484158A2 | 2019-05-15
CN109479149A | 2019-03-15
US11190770B2 | 2021-11-30
WO2018008904A2 | 2018-01-11
US20210377534A1 | 2021-12-02
ES2786077A2 | 2020-10-08
ES2737843R1 | 2020-05-08
ES2786077R1 | 2021-08-05
EP3484158A4 | 2019-12-25
ES2699749A2 | 2019-02-12
US20190158835A1 | 2019-05-23
WO2018008904A3 | 2018-08-09
ES2699749B2 | 2020-07-06
ES2699749R1 | 2019-06-21
ES2737843B2 | 2021-07-15
KR20180005119A | 2018-01-15
Legal status:
2020-01-16 | BA2A | Patent application published | Ref document number: 2737843; Country: ES; Kind code: A2; Effective date: 2020-01-16
2020-05-08 | EC2A | Search report published | Ref document number: 2737843; Country: ES; Kind code: R1; Effective date: 2020-04-30
2021-07-15 | FG2A | Definitive protection | Ref document number: 2737843; Country: ES; Kind code: B2; Effective date: 2021-07-15
Priority:
Application number | Filing date
KR20160085011 | 2016-07-05
KR20160085013 | 2016-07-05
申请号 | 申请日 | 专利标题 KR20160085011|2016-07-05| KR20160085013|2016-07-05| 相关专利
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an
国家/地区
|